Ceph : Configure Cluster
2014/06/29
Install Ceph, a distributed storage system.
For example, configure a Ceph Cluster in the following environment.

+------------------+                |                 +-----------------+
|  [ Admin Node ]  |10.0.0.80       |        10.0.0.30|  [ Client PC ]  |
|   Ceph-Deploy    +----------------+-----------------+                 |
| Meta Data Server |                |                 |                 |
+------------------+                |                 +-----------------+
                                    |
        +---------------------------+--------------------------+
        |                           |                          |
        |10.0.0.81                  |10.0.0.82                 |10.0.0.83
+-------+----------+       +--------+---------+       +-------+----------+
| [ Ceph Node #1 ] |       | [ Ceph Node #2 ] |       | [ Ceph Node #3 ] |
|  Monitor Daemon  +-------+  Monitor Daemon  +-------+  Monitor Daemon  |
|  Object Storage  |       |  Object Storage  |       |  Object Storage  |
+------------------+       +------------------+       +------------------+
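All Nodes must be able to resolve each other's hostnames. If DNS does not cover the srv.world names, an /etc/hosts like the one below on every Node is one way to do it ( the Client PC's hostname does not appear in this article, so "client" is only a placeholder ).

[cent@ceph-mds ~]$ sudo vi /etc/hosts
# add entries for all Nodes
10.0.0.80    ceph-mds.srv.world    ceph-mds
10.0.0.81    ceph01.srv.world      ceph01
10.0.0.82    ceph02.srv.world      ceph02
10.0.0.83    ceph03.srv.world      ceph03
10.0.0.30    client.srv.world      client    # placeholder name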
[1] First, configure the following on all Nodes, meaning the Admin Node and all Storage Nodes. Any user can be registered in sudoers; that user is used as the Ceph admin user from here on.
# grant the admin user passwordless sudo and exempt it from requiretty
# ( a typical setting for ceph-deploy; adjust the username to your environment )
[cent@ceph-mds ~]$ echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/ceph
[cent@ceph-mds ~]$ sudo chmod 440 /etc/sudoers.d/ceph
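To make sure the sudoers entry works as intended, the admin user should be able to run a command as root without being asked for a password ( a quick check; the output shown is illustrative ).

[cent@ceph-mds ~]$ sudo whoami
root    # no password prompt means the setting is effective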
[2] Create an SSH key-pair on the Admin Node and send the public key to each Ceph Node so they can be connected to without a passphrase.
[cent@ceph-mds ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cent/.ssh/id_rsa):    # Enter
Enter passphrase (empty for no passphrase):    # Enter
Enter same passphrase again:    # Enter
Your identification has been saved in /home/cent/.ssh/id_rsa.
Your public key has been saved in /home/cent/.ssh/id_rsa.pub.
The key fingerprint is:
8a:c1:0a:73:a6:81:4c:97:55:04:e5:be:37:db:22:3d cent@ceph-mds.srv.world
The key's randomart image is:
...
[cent@ceph-mds ~]$ vi ~/.ssh/config
# create new ( define all Ceph Nodes and the user )
Host ceph-mds
    Hostname ceph-mds.srv.world
    User cent
Host ceph01
    Hostname ceph01.srv.world
    User cent
Host ceph02
    Hostname ceph02.srv.world
    User cent
Host ceph03
    Hostname ceph03.srv.world
    User cent
[cent@ceph-mds ~]$ chmod 600 ~/.ssh/config
# send SSH public key to a Node
[cent@ceph-mds ~]$ ssh-copy-id ceph01
cent@ceph01.srv.world's password:    # password for the user
Now try logging into the machine, with "ssh 'ceph01'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

# send to the other Nodes too
[cent@ceph-mds ~]$ ssh-copy-id ceph02
[cent@ceph-mds ~]$ ssh-copy-id ceph03
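At this point it is worth confirming that the Admin Node can log in to each Node without a password, since ceph-deploy relies on this ( a simple check; the output shown is illustrative ).

[cent@ceph-mds ~]$ ssh ceph01 hostname
ceph01.srv.world
# repeat for ceph02 and ceph03; no password prompt should appear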
[3] Configure the Ceph cluster from the Admin Node.
# install ceph-deploy on the Admin Node
# ( an assumed first step; the Ceph repository that provides
#   /etc/yum.repos.d/ceph.repo must be set up beforehand )
[cent@ceph-mds ~]$ sudo yum -y install ceph-deploy
[cent@ceph-mds ~]$ sudo sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /etc/yum.repos.d/ceph.repo
# create a working directory ( implied by the prompt below ) and configure the cluster
[cent@ceph-mds ~]$ mkdir ceph && cd ceph
[cent@ceph-mds ceph]$ ceph-deploy new ceph01 ceph02 ceph03
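ceph-deploy new writes the initial cluster definition into the working directory. The generated ceph.conf lists the initial monitors, roughly like the sketch below ( the fsid is generated per cluster, and exact fields vary by Ceph version ).

[cent@ceph-mds ceph]$ cat ceph.conf
[global]
fsid = ...    # generated unique cluster ID
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 10.0.0.81,10.0.0.82,10.0.0.83
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx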
# install Ceph on all Nodes
[cent@ceph-mds ceph]$ ceph-deploy install ceph01 ceph02 ceph03
# initial configuration for monitoring and keys
[cent@ceph-mds ceph]$ ceph-deploy mon create-initial
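When create-initial succeeds, ceph-deploy gathers the keyrings from the monitors into the working directory; the client.admin keyring is what later lets the ceph commands in [4] talk to the cluster ( file names below are the ceph-deploy defaults and may differ slightly by version ).

[cent@ceph-mds ceph]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring
ceph.conf  ceph.log  ceph.mon.keyring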
[4] Configure storage from the Admin Node. In this example, create the directories /storage01, /storage02 and /storage03 on the Nodes ceph01, ceph02 and ceph03 respectively before running the commands below.
# prepare Object Storage Daemons
[cent@ceph-mds ceph]$ ceph-deploy osd prepare ceph01:/storage01 ceph02:/storage02 ceph03:/storage03
# activate Object Storage Daemons
[cent@ceph-mds ceph]$ ceph-deploy osd activate ceph01:/storage01 ceph02:/storage02 ceph03:/storage03
# configure the Meta Data Server
[cent@ceph-mds ceph]$ ceph-deploy admin ceph-mds
[cent@ceph-mds ceph]$ ceph-deploy mds create ceph-mds
# show status
[cent@ceph-mds ceph]$ ceph mds stat
e4: 1/1/1 up {0=ceph-mds=up:active}
[cent@ceph-mds ceph]$ ceph health
HEALTH_OK
# turns to "HEALTH_OK" after a few minutes if there is no problem
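To check where the three OSDs ended up, ceph osd tree prints the CRUSH hierarchy; each OSD should be reported as up ( output sketched for this 3-node layout; exact weights and formatting depend on the Ceph version ).

[cent@ceph-mds ceph]$ ceph osd tree
# id    weight  type name       up/down reweight
-1      3       root default
-2      1               host ceph01
0       1                       osd.0   up      1
-3      1               host ceph02
1       1                       osd.1   up      1
-4      1               host ceph03
2       1                       osd.2   up      1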